-
Algorithms are presented that efficiently shape the parity bits of systematic irregular repeat-accumulate (IRA) low-density parity-check (LDPC) codes by following the sequential encoding order of the accumulator. Simulations over additive white Gaussian noise (AWGN) channels with on-off keying show a gain of up to 0.9 dB over uniform signaling.
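As a minimal sketch of the sequential encoding order the abstract refers to (the connection pattern `conns` below is illustrative, not a code from the paper), the accumulator produces each parity bit as a running XOR of the info bits feeding that check, so a shaping algorithm can bias each parity decision in this same order:

```python
import numpy as np

def ira_parity(info_bits, connections):
    """Systematic IRA encoding: parity bit j is the accumulator's
    running XOR of the info bits wired into check j, so parities are
    produced strictly in sequential (accumulator) order."""
    parity = np.zeros(len(connections), dtype=int)
    acc = 0
    for j, idx in enumerate(connections):
        acc ^= int(np.bitwise_xor.reduce(info_bits[idx]))
        parity[j] = acc
    return parity

u = np.array([1, 0, 1, 1])                      # toy info bits
conns = [np.array([0, 1]), np.array([1, 2]),    # toy check connections
         np.array([2, 3]), np.array([0, 3])]
p = ira_parity(u, conns)
```

Because each parity depends only on earlier parities, a shaper can choose, check by check, how to influence the next parity bit before it is committed.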
-
Convolutional codes are widely used in many applications, and their encoders can be implemented with simple circuits. Decoding is often accomplished by the Viterbi algorithm or the maximum a posteriori decoder of Bahl et al. These algorithms are sequential in nature, requiring a decoding time proportional to the message length, which can be problematic for low-latency applications. This paper introduces a low-latency decoder for tail-biting convolutional codes (TBCCs) that processes multiple trellis stages in parallel. The new decoder is designed for hardware with parallel processing capabilities, and its overall decoding latency is proportional to the logarithm of the message length. The new decoding architecture is modified into a list decoder, and the list decoding performance can be enhanced by exploiting linearity to expand the search space. Certain modifications to standard TBCCs are supported by the new architecture and improve frame error rate performance.
Free, publicly accessible full text available March 10, 2026.
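The log-time idea can be illustrated with a small sketch (not the paper's hardware design): per-stage branch-metric matrices combine associatively under (min, +), so a balanced tree of pairwise combines reproduces the end-to-end metrics of a sequential Viterbi sweep in ⌈log2 N⌉ parallel rounds, and the best tail-biting path metric is the minimum diagonal entry:

```python
import numpy as np

def minplus(A, B):
    """(min, +) matrix product: entry (i, j) is the best path metric
    from state i to state j through the shared intermediate stage."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def tree_reduce(stages):
    """Combine per-stage metric matrices pairwise; the number of rounds
    is ceil(log2 N) instead of the N steps of a sequential sweep."""
    while len(stages) > 1:
        nxt = [minplus(stages[i], stages[i + 1])
               for i in range(0, len(stages) - 1, 2)]
        if len(stages) % 2:
            nxt.append(stages[-1])
        stages = nxt
    return stages[0]

rng = np.random.default_rng(0)
N, S = 8, 4                          # toy trellis: 8 stages, 4 states
mats = [rng.random((S, S)) for _ in range(N)]

par = tree_reduce(list(mats))        # parallel (tree) combination
seq = mats[0]                        # sequential reference
for M in mats[1:]:
    seq = minplus(seq, M)
best_tb = np.min(np.diag(par))       # tail-biting: closed-path minimum
```

The tree and sequential results agree because (min, +) matrix multiplication is associative, which is what lets the hardware process all stage pairs concurrently.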
-
The Consultative Committee for Space Data Systems (CCSDS) standard for high photon efficiency uses a serially concatenated (SC) code to encode pulse-position-modulated laser light. A convolutional encoder serves as the outer code and an accumulator serves as the inner code; these two component codes are connected through an interleaver. This coding scheme is called serially concatenated convolutionally coded pulse position modulation (SCPPM), and it is used for NASA's Deep Space Optical Communications (DSOC) experiment. For traditional decoding that traverses the trellis forwards and backwards according to the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm, the latency is on the order of the length of the trellis, which has 10,080 stages for the rate-2/3 DSOC code. This paper presents a novel alternative approach that simultaneously processes all trellis stages, successively combining pairs of stages into a meta-stage. This approach has latency on the order of the base-2 logarithm of the number of stages. The new decoder is implemented using the Compute Unified Device Architecture (CUDA) platform on an Nvidia graphics processing unit (GPU). Compared to field-programmable gate array (FPGA) implementations, the GPU implementation offers easier development, scalability, and portability across GPU models. The GPU implementation provides a dramatic increase in speed that facilitates more thorough simulation and enables a shift from FPGA to GPU processors for DSOC ground stations.
Free, publicly accessible full text available March 1, 2026.
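A sketch of the meta-stage combining in NumPy rather than CUDA (illustrative state and stage counts, not the DSOC trellis): because `max_star` below is the exact Jacobian logarithm, merging adjacent stages is associative, so all pairs can be fused concurrently and the number of combining rounds is log2 of the number of stages:

```python
import numpy as np

def max_star(a, b):
    """Jacobian logarithm log(e^a + e^b), the log-domain sum used in
    log-MAP/BCJR-style message passing."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def combine(A, B):
    """Fuse two adjacent stages of log-domain transition metrics into
    one meta-stage by marginalizing the shared intermediate state."""
    C = A[:, :, None] + B[None, :, :]     # indices: (from, mid, to)
    out = C[:, 0, :]
    for k in range(1, C.shape[1]):
        out = max_star(out, C[:, k, :])
    return out

rng = np.random.default_rng(1)
N, S = 16, 2                              # toy: 16 stages, 2 states
mats = [rng.normal(size=(S, S)) for _ in range(N)]

stages, rounds = list(mats), 0
while len(stages) > 1:                    # each round merges all pairs
    stages = [combine(stages[i], stages[i + 1])
              for i in range(0, len(stages), 2)]
    rounds += 1
meta = stages[0]

seq = mats[0]                             # sequential reference sweep
for M in mats[1:]:
    seq = combine(seq, M)
```

With 16 stages the reduction finishes in 4 rounds; for the 10,080-stage DSOC trellis the same pattern needs only on the order of 14 rounds.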
-
Posterior matching uses variable-length encoding of the message controlled by noiseless feedback of the received symbols to achieve high rates for short average blocklengths. Traditionally, the feedback of a received symbol occurs before the next symbol is transmitted. The transmitter optimizes the next symbol transmission with full knowledge of every past received symbol. To move posterior matching closer to practical communication, this paper seeks to constrain how often feedback can be sent back to the transmitter. We focus on reducing the frequency of the feedback while still maintaining the high rates that posterior matching achieves with feedback after every symbol. As it turns out, the frequency of the feedback can be reduced significantly with no noticeable reduction in rate.
Free, publicly accessible full text available December 8, 2025.
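A toy illustration of infrequent feedback (a Horstein-style median split on a BSC, not the paper's scheme; all parameters are made up): the transmitter refreshes its split of the receiver posterior only at feedback instants, every K channel uses, and repeats the same indicator bit in between, while the receiver updates its posterior after every symbol:

```python
import numpy as np

def pm_bsc(msg, M=64, p=0.05, K=1, thresh=1 - 1e-4, seed=0, max_uses=2000):
    """Toy posterior-matching-style transmission over a BSC(p).
    The transmitter learns the received symbols (hence the receiver
    posterior) only every K uses; between feedback instants it keeps
    signaling the indicator of the same posterior-mass split."""
    rng = np.random.default_rng(seed)
    post = np.full(M, 1.0 / M)
    split = np.zeros(M, dtype=bool)
    for n in range(max_uses):
        if n % K == 0:                        # feedback instant: new split
            order = np.argsort(-post)
            cum = np.cumsum(post[order])
            half = np.searchsorted(cum, 0.5)  # split posterior mass ~50/50
            split[:] = False
            split[order[:half + 1]] = True
        x = int(split[msg])                   # send: is msg in the top set?
        y = x ^ int(rng.random() < p)         # BSC flip with probability p
        post = post * np.where(split == bool(y), 1 - p, p)
        post = post / post.sum()
        if post.max() > thresh:
            return int(np.argmax(post)), n + 1
    return int(np.argmax(post)), max_uses

est, uses = pm_bsc(5, p=0.0, K=1)         # noiseless sanity check: bisection
est_k4, uses_k4 = pm_bsc(5, p=0.0, K=4)   # same, with feedback every 4 uses
```

In this naive repeat scheme, stretching the feedback interval from every symbol to every fourth symbol costs channel uses; the paper's point is that the intervening transmissions can be used more cleverly so that the rate is essentially unchanged.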
-
Channel-Prediction-Driven Rate Control for LDPC Coding in a Fading FSO Channel With Delayed Feedback
Free-space optical (FSO) links are sensitive to channel fading caused by atmospheric turbulence, varying weather conditions, and changes in the distance between the transmitter and receiver. To mitigate FSO fading, this paper applies linear and quadratic prediction to estimate fading channel conditions and dynamically select the appropriate low-density parity-check (LDPC) code rate. This adaptivity achieves reliable communication while efficiently utilizing the available channel mutual information. Protograph-based Raptor-like (PBRL) LDPC codes supporting a wide range of rates are designed, facilitating convenient rate switching. When channel state information (CSI) is known without delay, dynamically selecting the LDPC code rate appropriately maximizes throughput. This work explores how such prediction behaves as the feedback delay is increased from no delay to a delay of 4 ms for a channel with a coherence time of 10 ms.
Free, publicly accessible full text available January 1, 2026.
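The prediction-driven rate selection can be sketched as follows (the rate table, quality units, and two-step prediction horizon are hypothetical stand-ins, not the paper's PBRL design):

```python
import numpy as np

# Hypothetical rate table: (LDPC code rate, minimum channel quality it needs).
RATES = [(0.2, 0.25), (0.33, 0.40), (0.5, 0.55), (0.66, 0.70), (0.8, 0.85)]

def predict_quality(history, t_future, order=2):
    """Fit a degree-`order` polynomial (1 = linear, 2 = quadratic) to
    past channel-quality samples and extrapolate to the time when the
    codeword will actually traverse the channel."""
    t = np.arange(len(history), dtype=float)
    coeffs = np.polyfit(t, history, order)
    return float(np.polyval(coeffs, t_future))

def select_rate(predicted):
    """Highest code rate whose quality requirement is met, falling
    back to the most robust rate otherwise."""
    feasible = [r for r, need in RATES if predicted >= need]
    return max(feasible) if feasible else RATES[0][0]

hist = [0.80, 0.74, 0.66, 0.57]        # fading: quality is dropping
q_hat = predict_quality(hist, t_future=len(hist) + 1)  # 2 steps past last sample
rate = select_rate(q_hat)
```

The feedback delay enters through `t_future`: the longer the delay, the further the extrapolation, and the less reliable the selected rate becomes relative to the channel's coherence time.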
-
This paper proposes a syndrome sphere decoding (SSD) algorithm. SSD achieves the frame error rate (FER) of maximum likelihood (ML) decoding for a tail-biting convolutional code (TBCC) concatenated with an expurgating linear function (ELF) with significantly lower average and maximum decoding complexity than serial list Viterbi decoding (SLVD). SSD begins by using a Viterbi decoder to find the closest trellis path, which may not be tail-biting and may not satisfy the ELF. This trellis path has a syndrome comprising the difference between the received ELF and the ELF computed from the received message, and the difference between the beginning and ending states of the trellis path. This syndrome is used to find all valid tail-biting codewords that satisfy the ELF constraint and lie within a specified distance from the closest trellis path. The proposed algorithm avoids the complexity of SLVD at the cost of a large table containing all the offsets needed for each syndrome. A hybrid decoder combines SSD and SLVD with a fixed maximum list size, balancing the maximum list size for SLVD against the size of the SSD offset table.
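The offset-table idea can be illustrated on a small linear code (a (7,4) Hamming code standing in for the TBCC/ELF constraints): precompute, for each syndrome, every error pattern up to a chosen weight; adding any stored offset to a received word with that syndrome yields a valid codeword within that distance:

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used here only as a
# stand-in for the combined tail-biting/ELF constraints of the paper.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(y):
    return tuple(H @ y % 2)

def build_offset_table(max_weight=2):
    """Map each syndrome to all error patterns of weight <= max_weight
    that produce it; this is the (potentially large) offset table."""
    table = {}
    n = H.shape[1]
    for w in range(max_weight + 1):
        for pos in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(pos)] = 1
            table.setdefault(syndrome(e), []).append(e)
    return table

table = build_offset_table(max_weight=2)
y = np.array([1, 0, 0, 0, 0, 0, 0])      # hard decisions (not a codeword)
cands = [(y + e) % 2 for e in table[syndrome(y)]]
```

One table lookup replaces a deep list search: every candidate is a valid codeword within the chosen distance, mirroring how SSD trades SLVD's search complexity for table storage.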
-
In the short blocklength regime, serial list decoding of tail-biting (TB) convolutional codes concatenated with an expurgating linear function (ELF) can approach the random coding union bound on frame error rate (FER) performance. Decoding complexity for a particular received word depends on how deep in the list the decoder must search to find a valid TB-ELF codeword. The average list size is close to one at low-FER operating points such as 10^−6, and serial list decoding provides a favorable average complexity compared to other decoders with similar performance for these cases. However, the average list size can be on the order of a hundred or a thousand at higher, but still practically important, FER operating points such as 10^−3. It is useful to study the tradeoff between how deep the decoder is willing to search and the proximity to the FER achieved by an ML decoder. Often, this tradeoff is framed in terms of a maximum list depth. However, this paper frames the tradeoff in terms of a maximum allowable metric between the received word and the trellis paths on the list. We consider metrics of Euclidean distance and angle. This new approach draws on the wealth of existing literature on bounded-metric decoding to provide characterizations of how the choice of maximum allowable metric controls the tradeoffs between FER performance and both decoding complexity and undetected error rate. These characterizations lead to an example of an ELF-TB convolutional code that outperforms recent results for polar codes in terms of the lowest SNR that simultaneously achieves both a total error rate less than T = 10^−3 and an undetected error rate below U = 10^−5.
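The bounded-metric acceptance test can be sketched in a few lines (thresholds and vectors below are illustrative): a list candidate is accepted only if its Euclidean distance and/or angle to the received word is within the allowed maximum:

```python
import numpy as np

def bounded_metric_accept(rx, cand, max_dist=None, max_angle=None):
    """Accept a list-decoding candidate only if it lies inside the
    allowed metric ball around the received word; otherwise the decoder
    keeps searching or declares an erasure, trading FER against
    complexity and undetected-error rate."""
    ok = True
    if max_dist is not None:
        ok &= np.linalg.norm(rx - cand) <= max_dist
    if max_angle is not None:
        cos = np.dot(rx, cand) / (np.linalg.norm(rx) * np.linalg.norm(cand))
        ok &= np.arccos(np.clip(cos, -1.0, 1.0)) <= max_angle
    return bool(ok)

rx = np.array([0.9, -1.1, 1.0])              # noisy BPSK observations
near = bounded_metric_accept(rx, np.array([1.0, -1.0, 1.0]),
                             max_dist=0.5, max_angle=0.3)
far = bounded_metric_accept(rx, np.array([-1.0, -1.0, 1.0]), max_dist=0.5)
```

Tightening `max_dist` or `max_angle` lowers the undetected error rate and caps the search depth, at the cost of more rejected (erased) frames.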
-
This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size M = 2^k. We provide bounds for two cases: the first considers VLSF codes with possibly infinite decoding times and zero error probability; the second limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for the BEC by Devassy et al. in 2016. We show that the backoff from capacity reduces to zero as the erasure probability decreases, thus giving a negative answer to the open question Devassy et al. posed on whether the 23.4% backoff to capacity at k = 3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that systematic transmission followed by random linear fountain coding achieves higher rates than random linear coding alone.
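The stopping rule can be sketched as follows (toy parameters; this shows the rank condition, not the paper's achievability bound): send the k message bits systematically, then random fountain parities, and stop at the first time the unerased combination rows reach rank k over GF(2):

```python
import numpy as np

def gf2_rank(A):
    """Rank over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]     # move pivot row up
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]                  # eliminate column c
        rank += 1
    return rank

def vlsf_stop_time(k=5, eps=0.3, seed=2, max_uses=1000):
    """Systematic bits first, then random linear fountain parities over
    a BEC(eps); the rank decoder can recover the message exactly when
    the received (unerased) combination rows have rank k."""
    rng = np.random.default_rng(seed)
    rows = []
    for n in range(1, max_uses + 1):
        if n <= k:
            g = np.eye(k, dtype=int)[n - 1]      # systematic phase
        else:
            g = rng.integers(0, 2, size=k)       # fountain parity
        if rng.random() >= eps:                  # symbol not erased
            rows.append(g)
            if gf2_rank(np.array(rows)) == k:
                return n                         # first decodable time
    return None

n_stop = vlsf_stop_time()
```

The systematic prefix helps because each unerased systematic bit is guaranteed to raise the rank by one, whereas a random row may be linearly dependent on what was already received.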
-
For a two-variance model of the Flash read channel that degrades as a function of the number of program/erase cycles, this paper demonstrates that selecting write voltages to maximize the minimum page mutual information (MI) can increase device lifetime. In multi-level cell (MLC) Flash memory, one of four voltage levels is written to each cell, according to the values of the most-significant bit (MSB) page and the least-significant bit (LSB) page. In our model, each voltage level is then distorted by signal-dependent additive Gaussian noise that approximates the Flash read channel. When performing an initial read of a page in MLC Flash, one (for LSB) or two (for MSB) bits of information are read for each cell of the page. If LDPC decoding fails after the initial read, then an enhanced-precision read is performed. This paper shows that jointly designing write voltage levels and read thresholds to maximize the minimum MI between a page and its associated initial or enhanced-precision read bits can improve LDPC decoding performance.
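The per-page mutual information that the joint design maximizes can be computed numerically; the level placements, noise sigmas, read thresholds, and Gray mapping below are illustrative assumptions, not the paper's model parameters:

```python
import numpy as np
from math import erf, log2, sqrt

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * (1 - erf(x / sqrt(2)))

def interval_prob(mu, sigma, lo, hi):
    """P(lo < N(mu, sigma^2) <= hi)."""
    return q((lo - mu) / sigma) - q((hi - mu) / sigma)

def page_mi(levels, sigmas, bits, thresholds):
    """I(page bit; quantized read) with equiprobable levels: each cell
    writes one of four Gaussian levels, and the read compares the cell
    voltage against the given thresholds."""
    edges = [-np.inf] + list(thresholds) + [np.inf]
    nbins = len(edges) - 1
    p_by = np.zeros((2, nbins))                  # joint P(bit, read bin)
    for mu, s, b in zip(levels, sigmas, bits):
        for j in range(nbins):
            p_by[b, j] += 0.25 * interval_prob(mu, s, edges[j], edges[j + 1])
    pb, py = p_by.sum(axis=1), p_by.sum(axis=0)
    return sum(p_by[b, j] * log2(p_by[b, j] / (pb[b] * py[j]))
               for b in range(2) for j in range(nbins) if p_by[b, j] > 0)

levels = [0.0, 1.0, 2.0, 3.0]          # hypothetical write voltages
sigmas = [0.35, 0.20, 0.20, 0.20]      # two-variance model: erase state noisier
msb_bits, lsb_bits = [1, 1, 0, 0], [1, 0, 0, 1]   # one common Gray mapping
mi_msb = page_mi(levels, sigmas, msb_bits, [1.5])        # one-threshold read
mi_lsb = page_mi(levels, sigmas, lsb_bits, [0.5, 2.5])   # two-threshold read
```

A designer would then sweep the write levels and read thresholds (for both initial and enhanced-precision reads) to maximize min(mi_msb, mi_lsb), the quantity the paper identifies with decoding performance and device lifetime.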
